
    Topological phase transitions in multi-component superconductors

    We study the phase transition between a trivial and a time-reversal-invariant topological superconductor in a single-band system. By analyzing the interplay of symmetry, topology and energetics, we show that for a generic normal-state band structure, the phase transition occurs via extended intermediate phases in which even- and odd-parity pairing components coexist. For inversion-symmetric systems, the coexistence phase spontaneously breaks time-reversal symmetry. For noncentrosymmetric superconductors, the low-temperature intermediate phase is time-reversal breaking, while the high-temperature phase preserves time-reversal symmetry and has topologically protected line nodes. Furthermore, with approximate rotational invariance, the system has an emergent U(1) × U(1) symmetry, and novel topological defects, such as half vortex lines binding Majorana fermions, can exist. We analytically solve for the dispersion of the Majorana fermion and show that it exhibits small and large velocities at low and high energies, respectively. The relevance of our theory to the superconducting pyrochlore oxide Cd2Re2O7 and to half-Heusler materials is discussed.
    Comment: 14 pages, 7 figures; to appear in Phys. Rev. Lett.
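
    The coexistence and time-reversal-breaking claims can be made concrete with a schematic two-component Ginzburg-Landau expansion. The sketch below is illustrative and not taken from the paper; the coefficients are generic phenomenological parameters.

```latex
% Schematic Ginzburg-Landau free energy for coexisting even- (\Delta_e) and odd-parity (\Delta_o)
% pairing components (illustrative; coefficients are phenomenological).
\begin{align}
  F &= \alpha_e |\Delta_e|^2 + \alpha_o |\Delta_o|^2
       + \beta_e |\Delta_e|^4 + \beta_o |\Delta_o|^4
       + \beta_{eo} |\Delta_e|^2 |\Delta_o|^2 \nonumber \\
    &\quad + \gamma \left( \Delta_e^{*\,2} \Delta_o^{2} + \mathrm{c.c.} \right).
\end{align}
% For \gamma > 0 the relative phase between the two components locks to \pm\pi/2, so a
% coexistence state of the form \Delta_e \pm i\,\Delta_o spontaneously breaks time-reversal symmetry.
```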

    Asset Pricing and Cost of Equity for US Banking Sector by CAPM and TFPM from 1987-2011

    Although the Capital Asset Pricing Model (CAPM), a one-factor model, has a strong theoretical basis and is easy to use and understand, analysts also consider alternative models such as the Three-Factor Pricing Model (TFPM) developed by Fama and French (1993), because some of the differences between actual and estimated returns can be explained by the effects of firm size and the book-to-market ratio. The objective of using these two similar but complementary models is to estimate the cost of equity for the US banking sector. To do so, we estimate the model parameters both for individual banks and for the banking sector as a whole.
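
    As a minimal sketch of how the two models are typically fitted, the snippet below estimates CAPM and three-factor betas for a single bank's excess returns by ordinary least squares and converts them into a cost of equity. The data file, column names (ret, rf, mkt_rf, smb, hml) and factor premiums are hypothetical placeholders, not values from the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly data: bank return, risk-free rate, and the three Fama-French factors.
df = pd.read_csv("bank_returns.csv")            # placeholder columns: ret, rf, mkt_rf, smb, hml
excess = (df["ret"] - df["rf"]).values          # excess return of the bank

def ols(y, X):
    """Ordinary least squares with an intercept; returns [alpha, betas...]."""
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# CAPM: excess = alpha + beta * mkt_rf + eps
capm = ols(excess, df[["mkt_rf"]].values)
# TFPM: excess = alpha + b * mkt_rf + s * SMB + h * HML + eps
tfpm = ols(excess, df[["mkt_rf", "smb", "hml"]].values)

# Cost of equity = risk-free rate + factor loadings x assumed factor premiums (illustrative numbers).
rf, mkt_prem, smb_prem, hml_prem = 0.03, 0.06, 0.02, 0.03
coe_capm = rf + capm[1] * mkt_prem
coe_tfpm = rf + tfpm[1] * mkt_prem + tfpm[2] * smb_prem + tfpm[3] * hml_prem
print(f"CAPM cost of equity: {coe_capm:.2%}  TFPM cost of equity: {coe_tfpm:.2%}")
```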

    An improved Siamese network for face sketch recognition

    Face sketch recognition identifies the corresponding face photo for a given sketch from a large dataset. Traditional methods typically reduce the modality gap between face photos and sketches, and achieve good recognition rates, by matching against a pseudo image synthesized from the corresponding face photo. However, these methods cannot achieve consistently high recognition rates across all face sketch datasets, because the features they extract do not eliminate the effect of the modality difference between the images. Feature representations learned by deep convolutional neural networks are a feasible alternative for identification with wider applicability than other methods: they can be adapted to extract features that suppress the difference between face photos and sketches. Networks built by learning optimal local features achieve high recognition rates even when the input image shows geometric distortions. However, overfitting leads to unsatisfactory performance of deep learning methods on face sketch recognition tasks, and sketch images are often too simple to yield effective features. This paper aims to increase the matching rate using a Siamese convolutional network architecture. The framework extracts useful features from each image pair to reduce the modality gap, and data augmentation is used to avoid overfitting. We explore the performance of three loss functions and compare the similarity between each image pair. The experimental results show that our framework performs well on a composite sketch dataset and that data augmentation together with the modified network structure reduces the influence of overfitting.
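
    The general recipe described above can be sketched as a shared-weight Siamese CNN trained with a contrastive loss over photo-sketch pairs. The sketch below assumes PyTorch; the layer sizes, image size and margin are illustrative choices, not the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    """Shared-weight branch that embeds photos and sketches into one feature space."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, photo, sketch):
        # The same weights embed both modalities, which narrows the modality gap in feature space.
        f_p = self.fc(self.features(photo).flatten(1))
        f_s = self.fc(self.features(sketch).flatten(1))
        return f_p, f_s

def contrastive_loss(f_p, f_s, label, margin=1.0):
    """label = 1 for matching photo-sketch pairs, 0 for non-matching pairs."""
    d = F.pairwise_distance(f_p, f_s)
    return torch.mean(label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2))

# Usage with random stand-in data (grayscale 64x64 images, batch of 8).
model = SiameseCNN()
photos, sketches = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*model(photos, sketches), labels)
loss.backward()
```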

    DualFormer: Local-Global Stratified Transformer for Efficient Video Recognition

    While transformers have shown great potential for video recognition thanks to their strong capability for capturing long-range dependencies, they often suffer from the high computational cost induced by self-attention over a huge number of 3D tokens. In this paper, we present a new transformer architecture termed DualFormer, which can efficiently perform space-time attention for video recognition. Concretely, DualFormer stratifies the full space-time attention into dual cascaded levels: it first learns fine-grained local interactions among nearby 3D tokens, and then captures coarse-grained global dependencies between the query token and global pyramid contexts. Different from existing methods that apply space-time factorization or restrict attention computation to local windows to improve efficiency, our local-global stratification strategy captures both short- and long-range spatiotemporal dependencies while greatly reducing the number of keys and values in the attention computation. Experimental results verify the superiority of DualFormer over existing methods on five video benchmarks. In particular, DualFormer achieves 82.9%/85.2% top-1 accuracy on Kinetics-400/600 with ~1000G inference FLOPs, at least 3.2x fewer than existing methods with similar performance. We have released the source code at https://github.com/sail-sg/dualformer.
    Comment: Accepted by ECCV 2022.
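
    The local-global stratification can be illustrated with a toy two-stage attention block: window-restricted attention first, then every query attending to a small set of pooled global context tokens. The sketch assumes PyTorch; the window size, pooled context length and dimensions are illustrative, and it does not reproduce the released DualFormer code.

```python
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    """Illustrative two-stage attention: local windows first, then queries attend to pooled global context."""
    def __init__(self, dim=96, heads=4, window=8, pooled_len=16):
        super().__init__()
        self.window = window
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.AdaptiveAvgPool1d(pooled_len)   # builds a small set of global context tokens

    def forward(self, x):                              # x: (batch, n_tokens, dim), n_tokens divisible by window
        b, n, d = x.shape
        # Stage 1: fine-grained attention among nearby tokens inside each window.
        w = x.reshape(b * n // self.window, self.window, d)
        w, _ = self.local_attn(w, w, w)
        x = x + w.reshape(b, n, d)
        # Stage 2: coarse-grained attention from every query token to a few pooled global tokens,
        # which keeps the number of keys and values small.
        ctx = self.pool(x.transpose(1, 2)).transpose(1, 2)
        g, _ = self.global_attn(x, ctx, ctx)
        return x + g

# Usage on a toy sequence of 128 flattened space-time tokens.
block = LocalGlobalBlock()
out = block(torch.randn(2, 128, 96))
```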

    UrbanFM: Inferring Fine-Grained Urban Flows

    Urban flow monitoring systems play important roles in smart city efforts around the world. However, the ubiquitous deployment of monitoring devices, such as CCTVs, incurs a long-lasting and enormous cost for maintenance and operation. This suggests the need for a technology that can reduce the number of deployed devices while avoiding degradation of data accuracy and granularity. In this paper, we aim to infer real-time, fine-grained crowd flows throughout a city from coarse-grained observations. This task is challenging for two reasons: the spatial correlations between coarse- and fine-grained urban flows, and the complexity of external impacts. To tackle these issues, we develop a method called UrbanFM based on deep neural networks. Our model consists of two major parts: 1) an inference network that generates fine-grained flow distributions from coarse-grained inputs using a feature extraction module and a novel distributional upsampling module; 2) a general fusion subnet that further boosts performance by considering the influence of different external factors. Extensive experiments on two real-world datasets, namely TaxiBJ and HappyValley, validate the effectiveness and efficiency of our method compared to seven baselines, demonstrating state-of-the-art performance on the fine-grained urban flow inference problem.
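
    A hedged sketch of a distributional upsampling step in the spirit described above, assuming PyTorch: the coarse map is upsampled with a sub-pixel convolution, then normalized so that the fine-grained cells belonging to one coarse cell form a distribution that redistributes the coarse observation. Channel counts, kernel size and the upscaling factor are illustrative, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributionalUpsampling(nn.Module):
    """Upsample a coarse flow map by a factor N and redistribute each coarse value over its N x N fine cells."""
    def __init__(self, in_ch=64, scale=4):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_ch, scale * scale, kernel_size=3, padding=1)  # sub-pixel convolution
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, feat, coarse_flow):
        # feat: (B, in_ch, H, W) features from the coarse input; coarse_flow: (B, 1, H, W) observed coarse flows.
        logits = self.shuffle(self.conv(feat))                                  # (B, 1, H*scale, W*scale)
        b, _, hs, ws = logits.shape
        # Softmax within each scale x scale block, so the fine cells of one coarse cell sum to 1.
        blocks = F.unfold(logits, kernel_size=self.scale, stride=self.scale)    # (B, scale*scale, H*W)
        weights = F.fold(F.softmax(blocks, dim=1), output_size=(hs, ws),
                         kernel_size=self.scale, stride=self.scale)
        # Each fine cell receives its share of the corresponding coarse observation.
        return weights * F.interpolate(coarse_flow, scale_factor=self.scale, mode="nearest")

# Usage on toy data: a 32x32 coarse grid upsampled to 128x128.
m = DistributionalUpsampling()
fine = m(torch.randn(1, 64, 32, 32), torch.rand(1, 1, 32, 32))
```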

    Cross-Lingual Knowledge Editing in Large Language Models

    Knowledge editing aims to change a language model's behavior on a set of specific cases (i.e., the editing scope) by infusing the corresponding expected knowledge into the model. With the recent advancements in large language models (LLMs), knowledge editing has been shown to be a promising technique for adapting LLMs to new knowledge without retraining from scratch. However, most previous studies neglect the multilingual nature of some mainstream LLMs (e.g., LLaMA, ChatGPT and GPT-4) and typically focus on monolingual scenarios, where LLMs are edited and evaluated in the same language. As a result, the effect of editing in a source language on a different target language remains unknown. In this paper, we investigate this cross-lingual effect in knowledge editing. Specifically, we first collect a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese. Then, we perform English edits with various knowledge editing methods covering different paradigms and evaluate their performance in Chinese, and vice versa. To give a deeper analysis of the cross-lingual effect, the evaluation covers four aspects, i.e., reliability, generality, locality and portability. Furthermore, we analyze the inconsistent behaviors of the edited models and discuss their specific challenges.
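
    To make the four evaluation aspects concrete, here is a hedged sketch that scores each aspect as exact-match accuracy over a dedicated probe set. The function names and the (prompt, expected answer) data format are hypothetical and not the paper's evaluation code.

```python
from typing import Callable, List, Tuple

Probe = Tuple[str, str]   # (prompt, expected answer), e.g. a Chinese probe for an English edit

def accuracy(model: Callable[[str], str], probes: List[Probe]) -> float:
    """Exact-match accuracy of the edited model on one probe set."""
    hits = sum(model(prompt).strip() == answer.strip() for prompt, answer in probes)
    return hits / max(len(probes), 1)

def evaluate_edit(model, reliability, generality, locality, portability):
    """Score one cross-lingual edit along the four aspects named above.

    reliability: the edited fact itself (queried in the target language)
    generality:  paraphrases that fall inside the editing scope
    locality:    unrelated facts whose answers must stay unchanged
    portability: questions that require reasoning with the edited knowledge
    """
    return {
        "reliability": accuracy(model, reliability),
        "generality":  accuracy(model, generality),
        "locality":    accuracy(model, locality),
        "portability": accuracy(model, portability),
    }

# Usage with a trivial stand-in "model" that always returns the same string.
scores = evaluate_edit(lambda prompt: "巴黎",
                       reliability=[("法国的首都是哪里？", "巴黎")],
                       generality=[("哪个城市是法国的首都？", "巴黎")],
                       locality=[("德国的首都是哪里？", "柏林")],
                       portability=[("法国首都所在的国家使用什么货币？", "欧元")])
print(scores)
```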